Accurate moving object segmentation is an essential task for autonomous driving. It can provide effective information for many downstream tasks, such as collision avoidance, path planning, and static map construction. How to effectively exploit the spatio-temporal information is a critical question for 3D LiDAR moving object segmentation (LiDAR-MOS). In this work, we propose a novel deep neural network exploiting both spatio-temporal information and different representation modalities of LiDAR scans to improve LiDAR-MOS performance. Specifically, we first use a range image-based dual-branch structure to separately deal with the spatial and temporal information that can be obtained from sequential LiDAR scans, and later combine them using a motion-guided attention module. We also use a point refinement module via 3D sparse convolution to fuse the information from both the LiDAR range image and point cloud representations and reduce the artifacts on the borders of the objects. We verify the effectiveness of our proposed approach on the LiDAR-MOS benchmark of SemanticKITTI. Our method outperforms the state-of-the-art methods significantly in terms of LiDAR-MOS IoU. Benefiting from the devised coarse-to-fine architecture, our method operates online at sensor frame rate. The implementation of our method is available as open source at: https://github.com/haomo-ai/motionseg3d.
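One common way to realize the motion-guided attention described above is to let the temporal (motion) branch produce a per-pixel gate for the spatial (appearance) branch, plus a residual connection. A minimal numpy sketch; the gating form and residual are assumptions for illustration, not the authors' exact module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def motion_guided_attention(spatial_feat, temporal_feat):
    """Gate appearance features with a motion-derived attention map,
    then add a residual so appearance is never fully suppressed."""
    attn = sigmoid(temporal_feat)        # per-pixel motion saliency in (0, 1)
    return spatial_feat * attn + spatial_feat

# toy range-image features: (channels, H, W)
spatial = np.ones((2, 4, 8))
temporal = np.zeros((2, 4, 8))           # zero motion response -> gate = 0.5
fused = motion_guided_attention(spatial, temporal)
```

With a zero motion response the gate is sigmoid(0) = 0.5, so the fused feature is 1.5x the appearance feature; stronger motion pushes the gate toward 1 and emphasizes those pixels.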
Recently, neural radiance fields (NeRF) are revolutionizing novel view synthesis (NVS) with their superior performance. However, NeRF and its variants generally require a lengthy per-scene training procedure, where a multi-layer perceptron (MLP) is fitted to the captured images. To remedy this challenge, voxel-grid representations have been proposed to significantly speed up the training. However, these existing methods can only deal with static scenes. How to develop an efficient and accurate dynamic view synthesis method remains an open problem. Extending the methods for static scenes to dynamic scenes is not straightforward, as both the scene geometry and appearance change over time. In this paper, built upon recent advances in voxel-grid optimization, we propose a fast deformable radiance field method to handle dynamic scenes. Our method consists of two modules. The first module adopts a deformation grid to store 3D dynamic features, and a lightweight MLP that uses the interpolated features to decode the deformation mapping a 3D point in the observation space to the canonical space. The second module contains a density grid and a color grid to model the geometry and appearance of the scene. Occlusion is explicitly modeled to further improve the rendering quality. Experimental results show that our method achieves performance comparable to D-NeRF using only 20 minutes of training, which is more than 70x faster than D-NeRF, clearly demonstrating the efficiency of our proposed method.
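The first module's pipeline, as described, is: trilinearly interpolate a feature from the deformation grid at the observed point, then let a small MLP decode an offset into canonical space. A self-contained numpy sketch under assumed shapes; the single linear layer standing in for the MLP and the grid resolution are illustrative, not the paper's configuration:

```python
import numpy as np

def trilinear_interp(grid, pts):
    """grid: (D, D, D, C) voxel features on [0,1]^3; pts: (N, 3) queries."""
    D = grid.shape[0]
    x = np.clip(pts, 0.0, 1.0) * (D - 1)
    lo = np.floor(x).astype(int)
    hi = np.minimum(lo + 1, D - 1)
    w = x - lo                                # fractional part per axis
    out = np.zeros((pts.shape[0], grid.shape[-1]))
    for cx, wx in ((lo[:, 0], 1 - w[:, 0]), (hi[:, 0], w[:, 0])):
        for cy, wy in ((lo[:, 1], 1 - w[:, 1]), (hi[:, 1], w[:, 1])):
            for cz, wz in ((lo[:, 2], 1 - w[:, 2]), (hi[:, 2], w[:, 2])):
                out += (wx * wy * wz)[:, None] * grid[cx, cy, cz]
    return out

def to_canonical(pts, t, deform_grid, mlp):
    """canonical = x + MLP(interp(grid, x), t): deform observed points."""
    feat = trilinear_interp(deform_grid, pts)
    t_col = np.full((pts.shape[0], 1), t)      # time conditioning
    offset = mlp(np.concatenate([feat, t_col], axis=1))
    return pts + offset

rng = np.random.default_rng(0)
grid = rng.standard_normal((8, 8, 8, 4))
W = rng.standard_normal((5, 3)) * 0.01         # tiny stand-in "MLP"
pts = rng.random((16, 3))
canonical = to_canonical(pts, 0.5, grid, lambda f: f @ W)
```

Querying the density and color grids at the canonical points would then proceed with the same interpolation routine.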
Preys in the wild evolve to be camouflaged to avoid being recognized by predators. In this way, camouflage acts as a key defence mechanism across species that is critical to survival. To detect and segment the whole scope of a camouflaged object, camouflaged object detection (COD) is introduced as a binary segmentation task, with the binary ground truth camouflage map indicating the exact regions of the camouflaged objects. In this paper, we revisit this task and argue that the binary segmentation setting fails to fully understand the concept of camouflage. We find that explicitly modeling the conspicuousness of camouflaged objects against their particular backgrounds can not only lead to a better understanding about camouflage, but also provide guidance to designing more sophisticated camouflage techniques. Furthermore, we observe that it is some specific parts of camouflaged objects that make them detectable by predators. With the above understanding about camouflaged objects, we present the first triple-task learning framework to simultaneously localize, segment, and rank camouflaged objects, indicating the conspicuousness level of camouflage. As no corresponding datasets exist for either the localization model or the ranking model, we generate localization maps with an eye tracker, which are then processed according to the instance level labels to generate our ranking-based training and testing dataset. We also contribute the largest COD testing set to comprehensively analyse performance of the COD models. Experimental results show that our triple-task learning framework achieves new state-of-the-art, leading to a more explainable COD network. Our code, data, and results are available at: \url{https://github.com/JingZhang617/COD-Rank-Localize-and-Segment}.
The task of semi-supervised video object segmentation (VOS) has been greatly advanced, and state-of-the-art performance has been achieved by dense matching-based methods. Recent methods leverage space-time memory (STM) networks and learn to retrieve relevant information from all available sources, where past frames with object masks form an external memory and the current frame, as the query, is segmented using the mask information in the memory. However, when forming the memory and performing matching, these methods only exploit appearance information while ignoring motion information. In this paper, we advocate the return of \emph{motion information} and propose a motion uncertainty-aware framework (MUNet) for semi-supervised VOS. First, we propose an implicit method to learn the spatial correspondences between neighboring frames, building correlation cost volumes. To handle the challenging cases of occlusion and textureless regions when constructing dense correspondences, we incorporate uncertainty into dense matching and achieve a motion uncertainty-aware feature representation. Second, we introduce a motion-aware spatial attention module to effectively fuse the motion features with the semantic features. Comprehensive experiments on challenging benchmarks show that \textbf{\textit{using a small amount of data and combining it with powerful motion information can bring a significant performance boost}}. We achieve a $\mathcal{J}\&\mathcal{F}$ of $76.5\%$ using only DAVIS17 for training, which significantly outperforms the \textit{state-of-the-art} methods under the low-data protocol. \textit{The code will be released.}
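The correlation cost volume mentioned above is a standard construct: for every pixel in one frame, score its feature against every pixel in the neighboring frame. A minimal numpy sketch with cosine similarity on pre-normalized features (the normalization and the all-pairs layout are common choices, not necessarily the paper's exact ones):

```python
import numpy as np

def correlation_volume(f1, f2):
    """All-pairs correlation between per-pixel features of two frames.
    f1, f2: (H, W, C) L2-normalized features; returns (H, W, H, W)."""
    H, W, C = f1.shape
    a = f1.reshape(H * W, C)
    b = f2.reshape(H * W, C)
    corr = a @ b.T                 # cosine similarity, features are unit-norm
    return corr.reshape(H, W, H, W)

rng = np.random.default_rng(0)
f = rng.standard_normal((4, 5, 8))
f = f / np.linalg.norm(f, axis=-1, keepdims=True)
vol = correlation_volume(f, f)
# with identical frames, each pixel should match itself best
best = vol.reshape(20, 20).argmax(axis=1)
```

Uncertainty-aware variants then inspect how peaked each pixel's score distribution is: a flat row of the volume (occlusion, textureless region) signals an unreliable correspondence.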
Salient object detection is subjective in nature, which implies that multiple estimations should be related to the same input image. Most existing salient object detection models are deterministic, following a point-to-point estimation learning pipeline, making them incapable of estimating the predictive distribution. Although latent-variable-model-based stochastic prediction networks exist to model the prediction variants, a latent space based on a single clean saliency annotation is less reliable in exploring the subjective nature of saliency, leading to less effective saliency "divergence modeling". Given multiple saliency annotations, we introduce a general divergence modeling strategy via random sampling, and apply our strategy to an ensemble-based framework and three latent-variable-model-based solutions. Experimental results indicate that our general divergence modeling strategy works effectively in exploring the subjective nature of saliency.
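The random-sampling strategy can be pictured simply: at each training step, draw one of the available annotations per image, so over training the model is supervised by the full label distribution rather than a single "clean" map. A toy numpy sketch (the 2x2 maps and annotator counts are invented for illustration):

```python
import numpy as np

def sample_annotation(annotations, rng):
    """Randomly pick one of multiple human annotations for an image, so
    the supervision seen over training reflects annotator divergence."""
    idx = rng.integers(len(annotations))
    return annotations[idx]

rng = np.random.default_rng(0)
# three annotators disagree on a 2x2 saliency map
anns = [np.array([[1, 0], [0, 0]]),
        np.array([[1, 1], [0, 0]]),
        np.array([[1, 0], [1, 0]])]
# over many draws, the mean sampled label approaches the annotator mean
mean_label = np.mean([sample_annotation(anns, rng) for _ in range(3000)],
                     axis=0)
```

Pixels all annotators agree on stay deterministic, while contested pixels acquire fractional supervision, which is exactly the divergence a stochastic predictor can then learn to reproduce.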
Uncertainty estimation has been extensively studied in the recent literature, and it can usually be categorized into aleatoric uncertainty and epistemic uncertainty. Current aleatoric uncertainty estimation frameworks often neglect that aleatoric uncertainty is an inherent attribute of the data and can only be estimated correctly with an unbiased oracle model. Since the oracle model is inaccessible in most cases, we propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation. Further, we show that a trivial solution exists in the dual-head based heteroscedastic aleatoric uncertainty estimation framework, and introduce a new uncertainty consistency loss to avoid it. For epistemic uncertainty estimation, we argue that the internal variable in a conditional latent variable model is another source of epistemic uncertainty for modeling the predictive distribution, and explore the limited knowledge about the hidden true model. We validate our observations on a dense prediction task, namely camouflaged object detection. Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
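For context on the "dual-head" setup the abstract critiques: the standard heteroscedastic formulation has one head predict the mean and another the log-variance, trained with a Gaussian negative log-likelihood, so large errors can be explained away by predicting large variance. A minimal numpy sketch of that baseline loss only (the paper's sampling/selection strategy and consistency loss are not reproduced here):

```python
import numpy as np

def heteroscedastic_nll(pred, log_var, target):
    """Per-pixel Gaussian NLL with a learned variance head:
    0.5 * exp(-s) * (y - mu)^2 + 0.5 * s, where s = log sigma^2."""
    return 0.5 * np.exp(-log_var) * (pred - target) ** 2 + 0.5 * log_var

y = np.array([0.0, 0.0])
mu = np.array([0.0, 3.0])                  # second pixel is badly wrong
confident = heteroscedastic_nll(mu, np.zeros(2), y)     # claims low variance
hedged = heteroscedastic_nll(mu, np.array([0.0, 2.0]), y)  # admits variance
```

Raising the predicted variance on the wrong pixel lowers its loss, which illustrates why an unconstrained variance head can drift toward degenerate solutions without an additional consistency constraint.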
Existing RGB-D saliency detection models do not explicitly encourage RGB and depth to achieve effective multi-modal learning. In this paper, we introduce a novel multi-stage cascaded learning framework via mutual information minimization to "explicitly" model the multi-modal information between RGB images and depth data. Specifically, we first map the feature of each mode to a lower-dimensional feature vector, and adopt mutual information minimization as a regularizer to reduce the redundancy between appearance features from RGB and geometric features from depth. We then perform multi-stage cascaded learning to impose the mutual information minimization constraint at every stage of the network. Extensive experiments on benchmark RGB-D saliency datasets illustrate the effectiveness of our framework. Further, to prosper the development of this field, we contribute the largest (7x larger than NJU2K) dataset, which contains 15,625 image pairs with high-quality polygon-/scribble-/object-/instance-/rank-level annotations. Based on these rich labels, we additionally construct four new benchmarks with strong baselines and observe some interesting phenomena, which can motivate future model design. The source code and dataset are available at "https://github.com/jingzhang617/cascaded_rgbd_sod".
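The regularizer's intent, reducing redundancy between the two low-dimensional modality codes, can be sketched with a simple decorrelation proxy. Note this squared cross-correlation penalty is a stand-in for illustration; the paper's actual mutual information estimator is not reproduced here:

```python
import numpy as np

def redundancy_penalty(z_rgb, z_depth):
    """Decorrelation proxy for mutual-information minimization: penalize
    the squared cross-correlation between batch-normalized RGB appearance
    codes and depth geometry codes. z_*: (batch, dim) feature vectors."""
    def normalize(z):
        return (z - z.mean(0)) / (z.std(0) + 1e-8)
    a, b = normalize(z_rgb), normalize(z_depth)
    cross = a.T @ b / len(a)        # (d_rgb, d_depth) correlation matrix
    return float((cross ** 2).mean())

rng = np.random.default_rng(0)
z = rng.standard_normal((256, 8))
identical = redundancy_penalty(z, z)                       # fully redundant
independent = redundancy_penalty(z, rng.standard_normal((256, 8)))
```

Redundant codes incur a large penalty while independent codes incur almost none, so adding this term to the segmentation loss pushes the two branches toward complementary features.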
Camouflaged object detection (COD) aims to segment camouflaged objects hiding in their environment, which is challenging due to the similar appearance of camouflaged objects and their surroundings. Research in biology suggests that depth can provide useful object localization cues for camouflaged object discovery. In this paper, we study the depth contribution for camouflaged object detection, where the depth maps are generated with existing monocular depth estimation (MDE) methods. Due to the domain gap between the MDE datasets and our COD datasets, the generated depth maps are not accurate enough to be used directly. We then introduce two solutions to prevent noisy depth maps from dominating the training process. Firstly, we present an auxiliary depth estimation branch ("ADE") that aims to regress the depth maps. We find that "ADE" is especially necessary in our "generated depth" scenario. Secondly, we introduce a multi-modal confidence-aware loss function via a generative adversarial network to weigh the contribution of depth for camouflaged object detection. Our extensive experiments on various camouflaged object detection datasets show that existing "sensor depth" based RGB-D segmentation techniques work poorly with "generated depth", and that our two proposed solutions work cooperatively, achieving an effective exploration of the depth contribution for camouflaged object detection.
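The core of a confidence-aware loss is to gate the depth branch's contribution per pixel by an estimated reliability of the generated depth. A deliberately minimal numpy sketch; the per-pixel gating below is a generic form, not the paper's GAN-derived confidence:

```python
import numpy as np

def confidence_weighted_loss(rgb_loss, depth_loss, depth_conf):
    """Down-weight the depth branch wherever generated depth is judged
    unreliable: per-pixel confidence in [0, 1] gates its loss term."""
    return rgb_loss + depth_conf * depth_loss

rgb = np.full((2, 2), 0.1)
depth = np.array([[0.2, 0.2], [2.0, 2.0]])    # bottom row: noisy depth
conf = np.array([[1.0, 1.0], [0.0, 0.0]])     # zero trust in noisy pixels
total = confidence_weighted_loss(rgb, depth, conf)
```

With zero confidence the noisy depth term vanishes entirely, so training there is driven by the RGB branch alone rather than the unreliable generated depth.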
Transformer, which originates from machine translation, is particularly powerful at modeling long-range dependencies. Currently, the transformer is making revolutionary progress in various vision tasks, leading to significant performance improvements compared with the convolutional neural network (CNN) based frameworks. In this paper, we conduct extensive research on exploiting the contributions of transformers for accurate and reliable salient object detection. For the former, we apply transformer to a deterministic model, and explain that the effective structure modeling and global context modeling abilities lead to its superior performance compared with the CNN based frameworks. For the latter, we observe that both CNN and transformer based frameworks suffer greatly from the over-confidence issue, where the models tend to generate wrong predictions with high confidence. To estimate the reliability degree of both CNN- and transformer-based frameworks, we further present a latent variable model, namely inferential generative adversarial network (iGAN), based on the generative adversarial network (GAN). The stochastic attribute of the latent variable makes it convenient to estimate the predictive uncertainty, serving as an auxiliary output to evaluate the reliability of model prediction. Different from the conventional GAN, which defines the distribution of the latent variable as fixed standard normal distribution $\mathcal{N}(0,\mathbf{I})$, the proposed iGAN infers the latent variable by gradient-based Markov Chain Monte Carlo (MCMC), namely Langevin dynamics, leading to an input-dependent latent variable model. We apply our proposed iGAN to both fully and weakly supervised salient object detection, and explain that iGAN within the transformer framework leads to both accurate and reliable salient object detection.
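The Langevin dynamics named above is a concrete, well-defined update: step along the gradient of the log-density and inject Gaussian noise. A self-contained numpy sketch on a 1D Gaussian target (the target, step size, and chain counts are toy choices, not iGAN's actual latent posterior):

```python
import numpy as np

def langevin_step(z, grad_log_p, step, rng):
    """One Langevin dynamics update:
    z <- z + (step/2) * grad log p(z) + sqrt(step) * noise."""
    return (z + 0.5 * step * grad_log_p(z)
            + np.sqrt(step) * rng.standard_normal(z.shape))

# toy target: p(z) = N(mu, 1), so grad log p(z) = mu - z
mu = 3.0
grad = lambda z: mu - z
rng = np.random.default_rng(0)
z = np.zeros(5000)                    # 5000 parallel chains from z = 0
for _ in range(500):
    z = langevin_step(z, grad, 0.1, rng)
```

After mixing, the chains approximately follow the target distribution; in the input-dependent setting described above, grad_log_p would come from backpropagating the model's log-density with respect to the latent variable for each input image.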
Existing deep learning based stereo matching methods either focus on achieving optimal performance on the target dataset at the cost of poor generalization to other datasets, or handle cross-domain generalization by suppressing domain-sensitive features, which incurs a significant sacrifice in accuracy. To tackle these problems, we propose PCW-Net, a Pyramid Combination and Warping cost volume-based network that achieves both good cross-domain generalization and high stereo matching accuracy on various benchmarks. In particular, PCW-Net is designed for two purposes. First, we construct combination volumes on the upper levels of the pyramid and develop a cost volume fusion module to integrate them for initial disparity estimation. Fusing multi-scale combination volumes covers multi-scale receptive fields, so domain-invariant features can be extracted. Second, we construct a warping volume at the last level of the pyramid for disparity refinement. The proposed warping volume narrows the residue search range from the initial disparity search range down to a fine-grained one, which dramatically eases the network's task of finding the correct residue in an otherwise unconstrained residue search space. When trained on synthetic datasets and generalized to unseen real datasets, our method shows strong cross-domain generalization and outperforms existing state-of-the-art methods by a large margin. After fine-tuning on the real datasets, our method ranks first on KITTI 2012, second on KITTI 2015, and first on Argoverse among all published methods as of March 7, 2022. The code will be available at https://github.com/gallenszl/PCWNet.
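The warping-volume idea can be demonstrated compactly: warp the right-view features with the initial disparity, then build a small cost volume only over residues around it instead of over the full disparity range. A numpy sketch with nearest-neighbor warping and dot-product matching costs (both simplifications of what a real network would use):

```python
import numpy as np

def warp_right_to_left(right_feat, disparity):
    """Warp right-image features into the left view using a disparity map
    (nearest-neighbor sampling for simplicity)."""
    H, W, C = right_feat.shape
    xs = np.arange(W)[None, :] - np.round(disparity).astype(int)
    xs = np.clip(xs, 0, W - 1)
    rows = np.arange(H)[:, None]
    return right_feat[rows, xs]

def residual_cost_volume(left_feat, right_feat, init_disp, max_residue=2):
    """Cost volume over residues in [-r, r] around the initial disparity,
    instead of over the full disparity range."""
    costs = []
    for r in range(-max_residue, max_residue + 1):
        warped = warp_right_to_left(right_feat, init_disp + r)
        costs.append((left_feat * warped).sum(-1))  # per-pixel correlation
    return np.stack(costs, axis=-1)                  # (H, W, 2r+1)

# toy scene: true disparity is 3 everywhere, initial estimate is 2
rng = np.random.default_rng(0)
left = rng.standard_normal((4, 16, 8))
right = np.roll(left, -3, axis=1)                    # right view shifted by 3
init = np.full((4, 16), 2.0)
vol = residual_cost_volume(left, right, init)
best = vol.mean(axis=(0, 1)).argmax() - 2            # recovered residue
```

The search space shrinks from the full disparity range to just 2r+1 hypotheses per pixel, which is the point of refining residues rather than re-estimating disparity from scratch.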